01. Introducing Jay Alammar

Hi! Luis again.

Hello! For this next section, I'd like to introduce you to Jay Alammar. Jay has done some great work in interactive explorations of neural networks. If you haven't already, make sure you check out his blog.

Jay will be teaching you about a particular RNN architecture called "sequence to sequence". In this architecture, you feed in a sequence of data and the network outputs another sequence. It is typically used in problems such as machine translation, where you'd feed in a sentence in English and get out a sentence in Arabic.
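To make the idea concrete, here is a minimal sketch of the encoder-decoder shape of a sequence-to-sequence model, using plain NumPy with random, untrained weights. All sizes (hidden size, vocabulary sizes, output length) are hypothetical choices for illustration, not values from the lesson; a real model would be trained end to end.

```python
import numpy as np

rng = np.random.default_rng(0)
hidden = 8                      # hidden state size (assumed for the sketch)
vocab_in, vocab_out = 10, 12    # hypothetical source/target vocabulary sizes

# Encoder weights: input token -> hidden, hidden -> hidden
W_xh = rng.normal(size=(hidden, vocab_in)) * 0.1
W_hh = rng.normal(size=(hidden, hidden)) * 0.1
# Decoder weights: hidden -> hidden, hidden -> output token scores
W_dh = rng.normal(size=(hidden, hidden)) * 0.1
W_hy = rng.normal(size=(vocab_out, hidden)) * 0.1

def one_hot(i, n):
    v = np.zeros(n)
    v[i] = 1.0
    return v

def encode(tokens):
    """Fold the whole input sequence into one context vector."""
    h = np.zeros(hidden)
    for t in tokens:
        h = np.tanh(W_xh @ one_hot(t, vocab_in) + W_hh @ h)
    return h

def decode(context, out_len):
    """Unroll the decoder from the context to emit out_len tokens."""
    h = context
    outputs = []
    for _ in range(out_len):
        h = np.tanh(W_dh @ h)
        outputs.append(int(np.argmax(W_hy @ h)))
    return outputs

source = [3, 1, 4, 1, 5]                       # token ids of a source sentence
translation = decode(encode(source), out_len=4)
print(translation)
```

Note that the output sequence length (4 here) does not have to match the input length (5) — that flexibility is exactly why this architecture suits translation.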

Sequence to sequence will prepare you for the next section, Deep Learning Attention, also taught by Jay.